Urban rivers provide a water environment that affects residential life. River surface monitoring is essential for deciding where to target cleaning work and when to automatically start cleaning treatment. We focus on the organic sludge, or "scum", that accumulates on the river surface, giving it a distinctive odor and imposing negative externalities on the landscape. Its sparse distribution and unstable, amorphous patterns make automatic monitoring difficult. We propose a patch-classification pipeline with hybrid image augmentation to detect scum features on the river surface, increasing the diversity between the scum floating on the river and the river background, which reflects nearby structures such as buildings, bridges, poles, and barriers. In addition, we propose a scum coverage index over the river to help monitor severity online, collect scum, and decide on chemical treatment policies. Finally, we show how to apply the framework to a time-series dataset of images recorded every ten minutes to capture river scum events. We discuss the value of the pipeline and its experimental findings.
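The abstract does not specify how the scum coverage index is computed; as a minimal sketch, assuming the index is simply the fraction of river-surface patches that the classifier labels as scum, it could look like the following (all function and variable names are hypothetical):

```python
import numpy as np

def scum_coverage_index(patch_predictions: np.ndarray, river_mask: np.ndarray) -> float:
    """Fraction of river-surface patches classified as scum.

    patch_predictions: boolean grid, True where the patch classifier detected scum.
    river_mask: boolean grid, True for patches that lie on the river surface
                (excluding banks, bridges, and other non-river regions).
    """
    river_patches = river_mask.sum()
    if river_patches == 0:
        return 0.0
    scum_patches = np.logical_and(patch_predictions, river_mask).sum()
    return float(scum_patches) / float(river_patches)

# Example: a 4x6 grid of patches; 2 of the 22 river patches are flagged as scum.
preds = np.zeros((4, 6), dtype=bool)
preds[1, 2] = preds[2, 3] = True
mask = np.ones((4, 6), dtype=bool)
mask[0, :2] = False  # patches on the embankment, not the river
print(scum_coverage_index(preds, mask))  # ~0.09
```

A per-image index like this can then be tracked over the ten-minute time series to flag scum events once it exceeds a chosen threshold.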
Several techniques for mapping various types of components, such as words, attributes, and images, into an embedded space have been studied. Most of them estimate the embedded representation of a target entity as a point in the projective space. Some models, such as Word2Gauss, assume a probability distribution behind the embedded representation, which enables the spread or variance of the meaning of the embedded target components to be captured and considered in more detail. We examine the method of estimating embedded representations as probability distributions for the interpretation of fashion-specific abstract and difficult-to-understand terms. Terms such as "casual," "adult-casual," "beauty-casual," and "formal" are extremely subjective and abstract and are difficult for both experts and non-experts to understand, which discourages users from trying new fashion. We propose an end-to-end model called dual Gaussian visual-semantic embedding, which maps images and attributes into the same projective space and enables the meaning of these terms to be interpreted through its broad applications. We demonstrate the effectiveness of the proposed method through multifaceted experiments involving image and attribute mapping, image retrieval and re-ordering techniques, and a detailed theoretical/analytical discussion of the distance measure included in the loss function.
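The distance measure in the loss function is analyzed in the paper rather than reproduced here; as an illustrative assumption only, probabilistic embeddings in the Word2Gauss family are often compared with the KL divergence between diagonal Gaussians, which a sketch might compute as follows (the names below are hypothetical):

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q) -> float:
    """KL(p || q) between two diagonal Gaussians, a distance measure commonly
    used for probabilistic embeddings in the Word2Gauss family."""
    mu_p, var_p = np.asarray(mu_p, float), np.asarray(var_p, float)
    mu_q, var_q = np.asarray(mu_q, float), np.asarray(var_q, float)
    return 0.5 * float(np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    ))

# A broad, abstract term (large variance) compared against a narrower one.
print(kl_diag_gaussians([0.0, 0.0], [1.5, 1.5], [1.0, 1.0], [0.3, 0.3]))
```

The variance terms are what let a model express how broadly a term like "casual" applies, which a point embedding cannot capture.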
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is ${\tilde{\mathcal{O}}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of \emph{both regret and communication cost}. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their \emph{immediate neighbors} through a carefully designed consensus procedure.
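DisBE-LUCB itself is specified in the paper, not here; the sketch below only shows the standard single-agent LinUCB building block that batch-elimination variants extend, maintaining a ridge-regression estimate of the reward parameter and a confidence bonus (class and parameter names are hypothetical):

```python
import numpy as np

class LinUCB:
    """Single-agent LinUCB: pick the arm with the highest upper confidence bound
    under a ridge-regression model of rewards (not DisBE-LUCB itself)."""

    def __init__(self, d: int, reg: float = 1.0, alpha: float = 1.0):
        self.A = reg * np.eye(d)   # Gram matrix V = reg*I + sum of x x^T
        self.b = np.zeros(d)       # sum of reward-weighted features
        self.alpha = alpha         # width of the confidence bonus

    def select(self, contexts: np.ndarray) -> int:
        """contexts: (num_arms, d) feature matrix for the current round."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + self.alpha * bonus))

    def update(self, x: np.ndarray, reward: float) -> None:
        self.A += np.outer(x, x)
        self.b += reward * x
```

In a distributed batch-elimination scheme, agents would run such updates locally and only periodically communicate summary statistics through the central server, which is what keeps the communication cost near the $\tilde{\mathcal{O}}(dN)$ lower bound.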
In this paper, we consider first- and second-order techniques for solving continuous optimization problems arising in machine learning. In the first-order case, we propose a framework for transitioning from deterministic or semi-deterministic methods to stochastic quadratically regularized methods. Exploiting the two-phase nature of stochastic optimization, we propose a novel first-order algorithm with adaptive sampling and adaptive step size. In the second-order case, we propose a novel stochastic damped L-BFGS method that improves on previous algorithms in the highly non-convex setting of deep learning. Both algorithms are evaluated on well-known deep learning datasets and show promising performance.
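The abstract does not give the damping rule used in the proposed stochastic damped L-BFGS method; as an assumption-laden illustration, classical Powell damping of a curvature pair, a standard ingredient of damped (L-)BFGS variants and not necessarily the authors' exact rule, looks like this:

```python
import numpy as np

def powell_damped_pair(s: np.ndarray, y: np.ndarray, B: np.ndarray, eta: float = 0.2):
    """Powell damping of a quasi-Newton curvature pair (s, y).

    If the curvature s^T y is too small relative to s^T B s (as happens with
    noisy, non-convex stochastic gradients), blend y with B s so the stored
    pair keeps the Hessian approximation safely positive definite.
    """
    sBs = s @ B @ s
    sy = s @ y
    if sy < eta * sBs:
        theta = (1.0 - eta) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * (B @ s)
    return s, y
```

In an L-BFGS setting the damped pair would be pushed into the limited-memory history instead of forming B explicitly; the sketch uses an explicit B only to keep the damping formula readable.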
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo generalize surprisingly poorly under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly when linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
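As a toy illustration of the posterior-contraction argument (not the paper's experiments), a Bayesian linear regression with a duplicated feature shows how the sum of the duplicated weights is pinned down while their difference keeps roughly its prior variance, so posterior samples disagree strongly once the duplication breaks under covariate shift, whereas the MAP solution does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data with a perfectly duplicated feature: x2 = x1.
n, sigma = 200, 0.1
x1 = rng.normal(size=n)
X = np.stack([x1, x1], axis=1)
y = 1.0 * x1 + sigma * rng.normal(size=n)

# Gaussian prior w ~ N(0, I) and Gaussian likelihood give a Gaussian posterior.
post_prec = np.eye(2) + X.T @ X / sigma**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ y) / sigma**2

# w1 + w2 is pinned down by the data, but w1 - w2 keeps (almost) its prior variance.
print("var(w1 + w2):", np.array([1.0, 1.0]) @ post_cov @ np.array([1.0, 1.0]))
print("var(w1 - w2):", np.array([1.0, -1.0]) @ post_cov @ np.array([1.0, -1.0]))

# Under covariate shift the duplicate breaks (x2 != x1): posterior samples of
# w1 - w2 produce high-variance predictions, while the MAP point does not.
x_shift = np.array([1.0, -1.0])
samples = rng.multivariate_normal(post_mean, post_cov, size=5000)
print("posterior predictive std at shifted input:", (samples @ x_shift).std())
print("MAP prediction at shifted input:", post_mean @ x_shift)
```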
With a better understanding of the loss surfaces of multilayer networks, we can build more robust and accurate training procedures. It was recently discovered that independently trained SGD solutions can be connected by one-dimensional paths of near-constant training loss. In this paper, we show that there exist mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. Inspired by this discovery, we show how to efficiently build simplicial complexes for fast ensembling, outperforming independently trained deep ensembles in accuracy, calibration, and robustness to dataset shift. Notably, our approach only requires a few training epochs to discover a low-loss simplex, starting from a pre-trained solution. Code is available at https://github.com/g-benton/loss-surface-simplexes.
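The linked repository contains the authors' implementation; the sketch below only illustrates the ensembling idea of averaging predictions from models sampled inside a low-loss simplex of parameters, under the assumption that each vertex is a flattened weight vector (function names are hypothetical):

```python
import numpy as np

def sample_simplex_weights(vertex_params: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw one parameter vector uniformly from the simplex spanned by the
    vertices (each row is the flattened weights of one trained model), i.e.
    a random convex combination of the vertex solutions."""
    k = vertex_params.shape[0]
    coeffs = rng.dirichlet(np.ones(k))  # Dirichlet(1,...,1) is uniform on the simplex
    return coeffs @ vertex_params

def simplex_ensemble_predict(vertex_params, predict_fn, x, n_samples: int, seed: int = 0):
    """Average predictions of models sampled from the low-loss simplex."""
    rng = np.random.default_rng(seed)
    preds = [predict_fn(sample_simplex_weights(vertex_params, rng), x)
             for _ in range(n_samples)]
    return np.mean(preds, axis=0)
```

Because every sampled point lies in a region of low training loss, the averaged predictions behave like a cheap ensemble without training each member from scratch.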